25 research outputs found

    Novel high performance techniques for high definition computer aided tomography

    Get PDF
    International Mention in the doctoral degree.
    Medical image processing is an interdisciplinary field in which multiple research areas are involved: image acquisition, scanner design, image reconstruction algorithms, visualization, etc. X-Ray Computed Tomography (CT) is a medical imaging modality based on the attenuation suffered by X-rays as they pass through the body. Intrinsic differences in the attenuation properties of bone, air, and soft tissue result in high-contrast images of anatomical structures. The main objective of CT is to obtain tomographic images from radiographs acquired with X-ray scanners. The process of building a 3D image or volume from the 2D radiographs is known as reconstruction. One of the latest trends in CT is the reduction of the radiation dose delivered to patients through a decrease in the amount of acquired data. This reduction produces artefacts in the final images if conventional reconstruction methods are used, making it advisable to employ iterative reconstruction algorithms. Among the numerous reconstruction algorithms available, two types stand out: traditional algorithms, which are fast but cannot produce high-quality images in limited-data situations; and iterative algorithms, which are slower but more reliable when traditional methods do not meet the quality requirements. One of the priorities of reconstruction is to obtain the final images in near real time, in order to reduce the time spent in diagnosis. To accomplish this objective, new high-performance techniques and methods for accelerating these types of algorithms are needed. This thesis addresses the challenges of both traditional and iterative reconstruction algorithms, regarding acceleration and image quality. One common approach for accelerating these algorithms is the use of shared-memory and heterogeneous architectures.
In this thesis, we propose a novel simulation/reconstruction framework, FUX-Sim. This framework follows the hypothesis that the development of new flexible X-ray systems can benefit from computer simulations, which also enable performance to be checked before expensive real systems are implemented. Its modular design abstracts the complexities of programming for accelerated devices, facilitating the development and evaluation of the different configurations and geometries available. To obtain near-real-time execution, low-level optimizations of the main components of the framework are provided for Graphics Processing Unit (GPU) architectures. Another alternative tackled in this thesis is the acceleration of iterative reconstruction algorithms using distributed-memory architectures. We present a novel architecture that unifies the two most important computing paradigms for scientific computing nowadays: High Performance Computing (HPC) and Big Data. The proposed architecture combines Big Data frameworks with the advantages of accelerated computing. The methods presented in this thesis provide more flexible scanner configurations while offering an accelerated solution. Regarding performance, our approach is as competitive as the solutions found in the literature. Additionally, we demonstrate that our solution scales with the size of the problem, enabling the reconstruction of high-resolution images.
This work has been mainly funded by an FPU fellowship (FPU14/03875) from the Spanish Ministry of Education. It has also been partially supported by other grants:
• DPI2016-79075-R, "Nuevos escenarios de tomografía por rayos X", from the Spanish Ministry of Economy and Competitiveness.
• TIN2016-79637-P, "Towards unification of HPC and Big Data Paradigms", from the Spanish Ministry of Economy and Competitiveness.
• Short-term scientific mission (STSM) grant from NESUS COST Action IC1305.
• TIN2013-41350-P, "Scalable Data Management Techniques for High-End Computing Systems", from the Spanish Ministry of Economy and Competitiveness.
• RTC-2014-3028-1 NECRA, "Nuevos escenarios clínicos con radiología avanzada", from the Spanish Ministry of Economy and Competitiveness.
Programa Oficial de Doctorado en Ciencia y Tecnología Informática. President: José Daniel García Sánchez. Secretary: Katzlin Olcoz Herrero. Member: Domenico Tali
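The distinction the abstract draws between fast analytic methods and slower but more reliable iterative ones can be illustrated with the simplest iterative scheme, Kaczmarz's method (the basis of the ART family of CT algorithms). This is a minimal sketch on a made-up 2-pixel "object" and two ray sums, not any algorithm from the thesis:

```python
# Kaczmarz (ART) iterations: repeatedly project the current estimate
# onto the hyperplane defined by each measurement (ray sum).

def kaczmarz(A, b, sweeps=200):
    """Solve A x = b by cyclic projection onto each row's hyperplane."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(sweeps):
        for row, meas in zip(A, b):
            dot = sum(r * xi for r, xi in zip(row, x))
            norm2 = sum(r * r for r in row)
            step = (meas - dot) / norm2
            x = [xi + step * r for xi, r in zip(x, row)]
    return x

# Two simulated "ray sums" through a 2-pixel object with values [3, 1]:
A = [[1.0, 1.0],   # ray crossing both pixels equally
     [1.0, 2.0]]   # oblique ray weighting the second pixel twice
b = [4.0, 5.0]
x = kaczmarz(A, b)  # converges towards [3.0, 1.0]
```

With consistent data the iterates converge to the true object; with limited or noisy data, the same framework admits prior information, which is what makes iterative methods attractive in low-dose CT.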

    Accelerated iterative image reconstruction for cone-beam computed tomography through Big Data frameworks

    Get PDF
    One of the latest trends in Computed Tomography (CT) is the reduction of the radiation dose delivered to patients through the decrease of the amount of acquired data. This reduction results in artifacts in the final images if conventional reconstruction methods are used, making it advisable to employ iterative algorithms to enhance image quality. Most approaches are built around two main operators, backprojection and projection, which are computationally expensive. In this work, we present an implementation of those operators for iterative reconstruction methods exploiting the Big Data paradigm. We define an architecture based on Apache Spark that supports both Graphics Processing Unit (GPU) and CPU-based architectures. The operators are parallelized using a partitioning scheme based on the division of the volume and irregular data structures, in order to reduce the cost of communication and of computing the final images. Our solution accelerates the execution of the two most computationally expensive components with Apache Spark, improving the programming experience for new iterative reconstruction algorithms and the maintainability of the source code by raising the level of abstraction for programmers without high-performance computing experience. Through an experimental evaluation, we show that we can obtain results up to 10× faster for projection and 21× faster for backprojection when using a GPU-based cluster compared to a traditional multi-core version. Although a linear speed-up was not reached, the proposed approach can be a good alternative for porting previous medical image reconstruction applications already implemented in C/C++ or even with the CUDA or OpenCL programming models.
Our solution enables the automatic detection of GPU devices and the execution of CPU and GPU tasks at the same time within the same system, using all the available resources. This work was supported by the NIH, United States, under Grant R01-HL-098686 and Grant U01 EB018753; the Spanish Ministerio de Economía y Competitividad (projects TEC2013-47270-R, RTC-2014-3028 and TIN2016-79637-P); the Spanish Ministerio de Educación (grant FPU14/03875); and the Spanish Ministerio de Ciencia, Innovación y Universidades (Instituto de Salud Carlos III, project DTS17/00122; Agencia Estatal de Investigación, project DPI2016-79075-R-AEI/FEDER, UE), co-funded by the European Regional Development Fund (ERDF), "A way of making Europe". The CNIC is supported by the Spanish Ministerio de Ciencia, Innovación y Universidades and the Pro CNIC Foundation, and is a Severo Ochoa Center of Excellence (SEV-2015-0505). Finally, this research was partially supported by the Madrid regional Government under the grant "Convergencia Big data-Hpc: de los sensores a las Aplicaciones (CABAHLA-CM)", Ref: S2018/TCS-4423.
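The volume-partitioning idea behind the Spark-based operators can be sketched without Spark: each partition backprojects its own block of voxel rows independently (Spark's "map" phase over an RDD of blocks), and the partial volumes are merged at the end ("collect"). Everything below — the two-orthogonal-view toy geometry and all names — is illustrative, not the paper's actual implementation:

```python
# Toy backprojection of two orthogonal parallel-beam views into a 2D
# image, with the image rows split across independent partitions.

def backproject_partition(rows, proj0, proj90):
    """Backproject into the block of rows owned by one partition.

    rows   -- list of y indices this partition owns
    proj0  -- ray sums along y (one value per x column)
    proj90 -- ray sums along x (one value per y row)
    """
    block = {}
    for y in rows:
        block[y] = [proj0[x] + proj90[y] for x in range(len(proj0))]
    return block

def backproject(proj0, proj90, n_partitions=2):
    ny = len(proj90)
    # Partition the volume's rows (an RDD of blocks in the real system).
    parts = [list(range(i, ny, n_partitions)) for i in range(n_partitions)]
    # "map" phase: per-partition work, parallel on a cluster.
    blocks = [backproject_partition(p, proj0, proj90) for p in parts]
    # "collect" phase: merge partial results into one image.
    image = [None] * ny
    for blk in blocks:
        for y, row in blk.items():
            image[y] = row
    return image

image = backproject(proj0=[1.0, 2.0], proj90=[10.0, 20.0])
```

Because each block only reads the projections and writes its own rows, no inter-partition communication is needed during the map phase, which is the point of the partitioning scheme.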

    Propuesta arquitectónica para la ejecución de tareas en Apache Spark para entornos heterogéneos

    Get PDF
    The drawbacks of current computing platforms and the ease of migration to cloud computing have led more and more scientific applications to be adapted to distributed computing frameworks based on task flows. However, many of these applications have already been optimized for execution on accelerators such as GPUs. This work presents an architecture that eases the porting of applications traditionally based on HPC environments to the new Big Data computing paradigm. Moreover, we show how, thanks to larger memory capacity, automatic task distribution, and the greater computing power of heterogeneous systems, it is possible to converge towards a new, highly distributed execution model. We present a feasibility study of this proposal using GPUs within the Spark computing infrastructure. The architecture is evaluated through a medical image processing application. The results show that, although on a single node our architecture does not achieve better absolute results than the original application, as the number of GPUs, and therefore their occupancy, increases, the Spark-based application approaches the performance of the original simulator. Finally, we study the occupancy of the GPUs under the different proposed scheduling policies, showing that taking into account the dynamic characteristics of the GPUs (the number of tasks in execution) yields a further performance gain. This work has been partially funded by the Ministerio de Economía y Competitividad under project TIN2013-41350-P, Scalable Data Management Techniques for High-End Computing Systems.
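A scheduling policy that accounts for the dynamic state of the GPUs (the number of tasks currently in execution) can be sketched as a least-loaded placement rule. The class and its behaviour are a hypothetical illustration, not the actual Spark integration described above:

```python
# Occupancy-aware placement: send each new task to the GPU with the
# fewest tasks currently running, and keep the per-device counters.

class OccupancyScheduler:
    def __init__(self, n_gpus):
        self.running = [0] * n_gpus  # tasks in flight per device

    def assign(self):
        """Pick the least-loaded GPU and account for the new task."""
        gpu = min(range(len(self.running)), key=lambda g: self.running[g])
        self.running[gpu] += 1
        return gpu

    def finish(self, gpu):
        """Release one task's slot on the given device."""
        self.running[gpu] -= 1

sched = OccupancyScheduler(n_gpus=2)
placements = [sched.assign() for _ in range(4)]  # round-robins: 0, 1, 0, 1
```

A static policy (e.g. fixed task-to-device mapping) ignores in-flight work; tracking the running count is the minimal dynamic signal the study's best-performing policies exploit.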

    GPU-accelerated iterative reconstruction for limited-data tomography in CBCT systems

    Get PDF
    Standard cone-beam computed tomography (CBCT) involves the acquisition of at least 360 projections rotating through 360 degrees. Nevertheless, there are cases in which only a few projections can be taken in a limited angular span, such as during surgery, where rotation of the source-detector pair is limited to less than 180 degrees. Reconstruction of limited data with the conventional method proposed by Feldkamp, Davis and Kress (FDK) results in severe artifacts. Iterative methods may compensate for the lack of data by including additional prior information, although they imply a high computational burden and memory consumption. Results: We present an accelerated implementation of an iterative method for CBCT following the Split Bregman formulation, which reduces computational time through GPU-accelerated kernels. The implementation enables the reconstruction of large volumes (>1024³ pixels) using partitioning strategies in forward- and back-projection operations. We evaluated the algorithm on small-animal data for different scenarios with different numbers of projections, angular spans, and projection sizes. Reconstruction time varied linearly with the number of projections and quadratically with projection size, but remained almost unchanged with angular span. Forward- and back-projection operations represent 60% of the total computational burden. Conclusion: Efficient implementation using parallel processing and large-memory management strategies, together with GPU kernels, enables the use of the advanced reconstruction approaches needed in limited-data scenarios.
Our GPU implementation showed a significant time reduction (up to 48×) compared to a CPU-only implementation, cutting the total reconstruction time from several hours to a few minutes. This work has been supported by TEC2013-47270-R, RTC-2014-3028-1, TIN2016-79637-P (Spanish Ministerio de Economía y Competitividad), DPI2016-79075-R (Spanish Ministerio de Economía, Industria y Competitividad), CIBER CB07/09/0031 (Spanish Ministerio de Sanidad y Consumo), RePhrase 644235 (European Commission) and grant FPU14/03875 (Spanish Ministerio de Educación, Cultura y Deporte).
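The Split Bregman formulation mentioned above alternates a quadratic image update with a cheap closed-form shrinkage on the gradient variable. A minimal 1D total-variation denoising sketch shows the structure (the full CBCT problem replaces the identity with the projection operator and runs the subproblems as GPU kernels); parameters, signal and inner Gauss-Seidel solver are our own illustrative choices, not the paper's implementation:

```python
# Split Bregman for 1D anisotropic TV denoising:
#   min_u  mu/2 ||u - f||^2 + |D u|_1,  D = forward difference.

def shrink(x, gamma):
    """Soft-thresholding: the closed-form d-update in Split Bregman."""
    return max(abs(x) - gamma, 0.0) * (1.0 if x > 0 else -1.0)

def tv_denoise_1d(f, mu=1.0, lam=2.0, outer=30, sweeps=10):
    n = len(f)
    u = list(f)
    d = [0.0] * (n - 1)  # auxiliary gradient variable
    b = [0.0] * (n - 1)  # Bregman variable
    for _ in range(outer):
        # u-subproblem: (mu I + lam D^T D) u = mu f + lam D^T (d - b),
        # solved approximately with a few Gauss-Seidel sweeps.
        v = [d[i] - b[i] for i in range(n - 1)]
        for _ in range(sweeps):
            for i in range(n):
                dtv = (v[i - 1] if i > 0 else 0.0) - (v[i] if i < n - 1 else 0.0)
                deg = (1 if i > 0 else 0) + (1 if i < n - 1 else 0)
                nbr = lam * ((u[i - 1] if i > 0 else 0.0) +
                             (u[i + 1] if i < n - 1 else 0.0))
                u[i] = (mu * f[i] + lam * dtv + nbr) / (mu + lam * deg)
        # d-update (shrinkage) and Bregman update.
        for i in range(n - 1):
            du = u[i + 1] - u[i]
            d[i] = shrink(du + b[i], 1.0 / lam)
            b[i] += du - d[i]
    return u

f = [0.0, 0.1, -0.1, 0.1, 1.1, 0.9, 1.1, 1.0]  # noisy step signal
u = tv_denoise_1d(f)  # small oscillations flattened, edge preserved
```

The expensive parts of the real problem are exactly the forward- and back-projection products inside the u-subproblem, which is why those two operators dominate the 60% figure reported above.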

    FUX-Sim: implementation of a fast universal simulation/reconstruction framework for X-ray systems

    Get PDF
    The availability of digital X-ray detectors, together with advances in reconstruction algorithms, creates an opportunity for bringing 3D capabilities to conventional radiology systems. The downside is that reconstruction algorithms for non-standard acquisition protocols are generally based on iterative approaches that involve a high computational burden. The development of new flexible X-ray systems could benefit from computer simulations, which may enable performance to be checked before expensive real systems are implemented. The development of simulation/reconstruction algorithms in this context poses three main difficulties. First, the algorithms deal with large data volumes and are computationally expensive, thus leading to the need for hardware and software optimizations. Second, these optimizations are limited by the high flexibility required to explore new scanning geometries, including fully configurable positioning of source and detector elements. This work was funded by the projects TEC2013-47270-R, RTC-2014-3028-1, TIN2016-79637-P, DPI2016-79075-R, and the Cardiovascular Research Network (RIC, RD12/0042/0057) from the Spanish Ministerio de Economía y Competitividad (www.mineco.gob.es) and the FPU14/03875 grant from the Spanish Ministerio de Educación, Cultura y Deporte (http://www.mecd.gob.es). We also thank NVIDIA for providing the Tesla K40 device used to perform the experiments.
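"Fully configurable positioning of source and detector elements" amounts to computing both positions from a handful of geometry parameters at any gantry angle. The class below is a hypothetical sketch of that idea for a circular cone-beam orbit, not FUX-Sim's actual API; `sod`/`sdd` are the usual source-object and source-detector distances:

```python
import math

class ConeBeamGeometry:
    """Toy circular cone-beam geometry parameterized by two distances."""

    def __init__(self, sod, sdd):
        self.sod = sod  # source-to-isocenter distance
        self.sdd = sdd  # source-to-detector distance

    def source(self, angle_deg):
        """Source position on a circle of radius sod around the isocenter."""
        a = math.radians(angle_deg)
        return (self.sod * math.cos(a), self.sod * math.sin(a))

    def detector_center(self, angle_deg):
        """Detector center, opposite the source along the central ray."""
        a = math.radians(angle_deg)
        r = self.sdd - self.sod  # isocenter-to-detector distance
        return (-r * math.cos(a), -r * math.sin(a))

geo = ConeBeamGeometry(sod=500.0, sdd=1000.0)
src = geo.source(0.0)            # (500.0, 0.0)
det = geo.detector_center(0.0)   # (-500.0, 0.0)
```

More flexible systems simply expose more such parameters (detector offset, tilt, non-circular orbits); the simulation framework's job is to keep the projection and backprojection kernels correct for any of these settings.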

    Bronchoscopist's perception of the quality of the single-use bronchoscope (Ambu aScope4™) in selected bronchoscopies: a multicenter study in 21 Spanish pulmonology services

    Get PDF
    Background: The disposable bronchoscope is an excellent alternative to face the problem of SARS-CoV-2 and other cross infections, but the bronchoscopist's perception of its quality has not been evaluated. Methods: To evaluate the quality of the Ambu aScope4 disposable bronchoscope, we carried out a cross-sectional study in 21 Spanish pulmonology services, using a standardized questionnaire completed by the bronchoscopists at the end of each bronchoscopy. The variables were described with absolute and relative frequencies and with measures of central tendency and dispersion, depending on their nature. The existence of learning curves was evaluated by CUSUM analysis. Results: The most frequent indication in the 300 included bronchoscopies was bronchial aspiration (69.3%), and the median duration was 9.1 min. The route of entry was nasal in 47.2% and oral in 34.1%. The average score for ease of use, image, and aspiration quality was 80/100. All the planned techniques were performed in 94.9% of the bronchoscopies, and the bronchoscopist was satisfied in 96.6%. Bronchoscopists highlighted the portability and immediacy of the aScope4™ to start the procedure (99.3%) and the possibility of taking and storing images (99.3%). The CUSUM analysis showed average scores > 70/100 from the first procedure, and from the 9th procedure more than 80% of the scores exceeded 80/100. Conclusions: The aScope4™ scored well for ease of use, imaging, and aspiration. We found a learning curve with excellent scores from the 9th procedure. Bronchoscopists highlighted its portability, immediacy of use and the possibility of taking and storing images.
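The CUSUM learning-curve analysis used above tracks the cumulative sum of shortfalls against a target score: a flat or downward curve means performance at or above target from the outset. A minimal sketch with made-up scores (the study's actual per-procedure data are not reproduced here):

```python
# CUSUM learning curve: cumulative sum of (target - score) per procedure.
# Values that never rise indicate above-target performance throughout.

def cusum(scores, target):
    """Return the running cumulative sum of shortfalls against target."""
    total, curve = 0.0, []
    for s in scores:
        total += target - s
        curve.append(total)
    return curve

scores = [72, 75, 78, 81, 85, 88]    # hypothetical per-procedure scores
curve = cusum(scores, target=70)     # stays negative: target met from #1
```

Applied per bronchoscopist, a curve that stays below zero against a 70/100 target reproduces the paper's finding of adequate scores from the very first procedure.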

    Recommendations for treatment with recombinant human growth hormone in pediatric patients in Colombia

    Get PDF
    In Colombia there are currently no clear guidelines for the diagnosis of patients with short stature or for the treatment of this population with recombinant human growth hormone (somatropin), a situation favored by the diversity of training programs in pediatric endocrinology. In response, the Asociación Colegio Colombiana de Endocrinología Pediátrica (ACCEP) led the first Colombian short stature expert consensus in order to standardize the use of somatropin. This work had the participation and endorsement of clinical experts representing the Sociedad Colombiana de Pediatría, Secretaría Distrital de Salud de Bogotá - Subred Integrada de Servicios de Salud Suroccidente, Fundación Universitaria Sanitas, Universidad de los Andes and other public and private health institutions in the country, in addition to methodological experts from the Instituto Global de Excelencia Clínica Keralty. After a thorough review of the literature, the consensus proposes unified definitions, a diagnostic algorithm, reference parameters for biochemical and dynamic tests, a description of the considerations on somatropin use for the indications approved by the Colombian food and drug regulatory agency and, finally, a proposed informed consent form and a medication fact sheet for parents and patients.

    Patients with Crohn's disease have longer post-operative in-hospital stay than patients with colon cancer but no difference in complications' rate

    Get PDF
    BACKGROUND: Right hemicolectomy and ileocecal resection are used to treat benign conditions like Crohn's disease (CD) and malignant ones like colon cancer (CC). AIM: To investigate differences in pre- and peri-operative factors and their impact on post-operative outcome in patients with CC and CD. METHODS: This is a sub-group analysis of the European Society of Coloproctology's prospective, multi-centre snapshot audit. Adult patients with CC and CD undergoing right hemicolectomy or ileocecal resection were included. The primary outcome measure was 30-day post-operative complications. Secondary outcome measures were post-operative length of stay (LOS) and readmission. RESULTS: Three hundred and seventy-five patients with CD and 2,515 patients with CC were included. Patients with CD were younger (median = 37 years for CD and 71 years for CC; P < 0.01), had a lower American Society of Anesthesiologists (ASA) grade (P < 0.01) and less comorbidity (P < 0.01), but were more likely to be current smokers (P < 0.01). Patients with CD were more frequently operated on by colorectal surgeons (P < 0.01), more frequently underwent ileocecal resection (P < 0.01), and had a higher rate of de-functioning/primary stoma construction (P < 0.01). Thirty-day post-operative mortality occurred exclusively in the CC group (66/2,515, 2.3%). In multivariate analyses, the risk of post-operative complications was similar in the two groups (OR 0.80, 95%CI: 0.54-1.17; P = 0.25). Patients with CD had a significantly longer LOS (geometric mean 0.87, 95%CI: 0.79-0.95; P < 0.01). There was no difference in re-admission rates. The audit did not collect data on the post-operative enhanced recovery protocols implemented in the different participating centres. CONCLUSION: Patients with CD were younger, had a lower ASA grade and less comorbidity, were operated on by experienced surgeons and underwent less radical resection, but had a longer LOS than patients with CC, although the complications' rate was not different between the two groups.

    Voluntariado Social Sustentável Aprenda fazendo serviço comunitário

    No full text
    The main objective of this first pilot project, "ApS Voluntariado Social Sostenible. Aprender haciendo un servicio a la comunidad" ("Service-Learning: Sustainable Social Volunteering. Learning by serving the community", hereinafter ApS VSS), has been to implement sustainable social volunteering in the Faculty of Social Work in order to develop generic and specific skills and competences that allow participants to reaffirm themselves socially and enable them to convey the importance of the 2030 Agenda. Several capacities are brought together to channel actions and to act as ambassadors of the Sustainable Development Goals (SDGs), bearing in mind the context of COVID-19. Through the innovative Service-Learning methodology, the proposed objectives were achieved: the project promoted joint work, developing human capacities through the connection between individuals, group work and community work. The foundations of Social Work were needed to work from an approach based on rights, values and commitment. In addition, life skills were developed from popular education, drawing on approaches and resources such as eco-social education, universal design for learning, inclusive environments, easy-to-read materials and eco-didactic gardens. The project involved a group of students from different years of the bachelor's and master's degrees in Social Work at the Universidad Complutense de Madrid, together with people from outside the university with intellectual diversity. Students from the Faculties of Fine Arts and Statistical Studies also took part in specific activities of the project.
    As community service, participants were challenged to act as ambassadors of the SDGs, sharing their message with the rest of the university students and the community, not only through scientific knowledge but also through the recognition of others: cultures, knowledge and processes different from our own. To this end, using the language of the performing arts and easy reading, the participants, together with the theatre company La Tramoya, prepared a play adapted from the work of Miguel de Cervantes Saavedra, "Quijote cabalga con los ODS". Finally, in this first edition of the project the proposed objectives were achieved, in some cases exceeding expectations, as summarized in the activities presented in Annex 1. Besides the training received by the students enrolled in the activity, the SDGs were materialized in the play and in a pedagogical activity related to them. Depto. de Trabajo Social y Servicios Sociales. Fac. de Trabajo Social. Oficina Universitaria ApS UCM.